When abundant loosely related unlabeled data are available and labeled data are scarce, the paradigm of machine intelligence shifts from purely supervised learning to a more practical scenario. Most existing algorithms assume that the underlying task distribution is stationary. Here, we consider a more realistic and challenging setting in which the task distribution evolves over time. We name this problem semi-supervised meta-learning with evolving task distributions, abbreviated as SETS. Two key challenges arise in this more realistic setting: (i) how to use unlabeled data in the presence of a large amount of out-of-distribution (OOD) unlabeled data; and (ii) how to prevent catastrophic forgetting of previously learned task distributions due to the task distribution shift. We propose an OOD-robust and knowledge-preserving semi-supervised meta-learning approach (ORDER) to tackle these two major challenges. Specifically, our ORDER introduces a novel mutual information regularization to robustify the model with unlabeled OOD data and adopts an optimal transport regularization to remember previously learned knowledge in feature space. In addition, we test our method on a very challenging dataset: SETS on large-scale non-stationary semi-supervised task distributions consisting of (at least) 72K tasks. With extensive experiments, we demonstrate that the proposed ORDER alleviates forgetting on evolving task distributions and is more robust to OOD data than related strong baselines.
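As a rough illustration of the optimal transport regularization idea above, the sketch below computes an entropy-regularized OT cost (Sinkhorn iterations) between a batch of features from previously learned tasks and a batch of current features; penalizing this cost discourages drift in feature space. All names, dimensions, and the uniform-weight assumption are illustrative, not the paper's implementation.

```python
import numpy as np

def sinkhorn_distance(C, eps=0.1, n_iters=200):
    """Entropy-regularized OT cost between two uniform discrete
    distributions, given the pairwise cost matrix C."""
    n, m = C.shape
    a = np.full(n, 1.0 / n)          # uniform source weights
    b = np.full(m, 1.0 / m)          # uniform target weights
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):         # Sinkhorn scaling iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = np.diag(u) @ K @ np.diag(v)  # approximate transport plan
    return float(np.sum(P * C))

# Penalize drift between old-task and current-task feature batches.
rng = np.random.default_rng(0)
old_feats = rng.normal(0.0, 1.0, size=(8, 4))
new_feats = rng.normal(0.5, 1.0, size=(8, 4))
C = ((old_feats[:, None, :] - new_feats[None, :, :]) ** 2).sum(-1)
C = C / C.max()                      # normalize costs for stability
ot_penalty = sinkhorn_distance(C)
```

In training, a term like `ot_penalty` would be added to the meta-learning loss so the current feature distribution stays close to the remembered one.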
Exemplar-free class-incremental learning (CIL) is a challenging problem because rehearsing data from previous phases is strictly prohibited, causing catastrophic forgetting in deep neural networks (DNNs). In this paper, we present iVoro, a holistic framework for CIL derived from computational geometry. We found that the Voronoi Diagram (VD), a classical model for space subdivision, is especially powerful for solving the CIL problem, because a VD can be constructed incrementally: a newly added site (class) only affects the proximate classes, making the non-contiguous classes hardly forgettable. Furthermore, to find better centers for VD construction, we bridge DNNs with VDs using a power diagram and show that the VD structure can be optimized by integrating local DNN models using a divide-and-conquer algorithm. Moreover, our VD construction is not restricted to the deep feature space, but is also applicable to multiple intermediate feature spaces, promoting the VD to a multi-centered VD (CIVD) that efficiently captures multi-granular features from DNNs. Importantly, iVoro is also capable of uncertainty-aware test-time Voronoi cell assignment and exhibits a high correlation (up to ~0.9) between geometric uncertainty and predictive accuracy. Putting everything together, iVoro achieves up to 25.26%, 37.09%, and 33.21% improvements on CIFAR-100, TinyImageNet, and ImageNet-Subset, respectively, compared to state-of-the-art non-exemplar CIL approaches. In conclusion, iVoro enables highly accurate, privacy-preserving, and geometrically interpretable CIL, which is particularly useful when cross-phase data sharing is forbidden, e.g., in medical applications. Our code is available at https://machunwei.github.io/ivoro.
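The geometric intuition, that adding a new Voronoi site only carves space near its own center, can be sketched with a minimal nearest-center classifier grown one class at a time. The `VoronoiClassifier` name and the class-mean centers are hypothetical simplifications; the actual iVoro framework uses power diagrams and divide-and-conquer integration of local DNN models.

```python
import numpy as np

class VoronoiClassifier:
    """Each class center is a Voronoi site in feature space; a test
    point is assigned to the cell (class) of its nearest center.
    Adding a class never modifies existing centers."""
    def __init__(self):
        self.centers = []   # one site per class
        self.labels = []

    def add_class(self, feats, label):
        # New site = mean of the class's features; old sites untouched.
        self.centers.append(np.mean(feats, axis=0))
        self.labels.append(label)

    def predict(self, x):
        d = [np.linalg.norm(x - c) for c in self.centers]
        return self.labels[int(np.argmin(d))]

clf = VoronoiClassifier()
clf.add_class(np.array([[0.0, 0.0], [0.2, 0.1]]), "cat")   # phase 1
clf.add_class(np.array([[3.0, 3.0], [2.8, 3.1]]), "dog")   # phase 2
pred = clf.predict(np.array([0.1, 0.0]))
```

Because the phase-2 site is far from the phase-1 cell, old predictions are unaffected, which is the incremental property the abstract highlights.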
Task-free continual learning (CL) aims to learn a non-stationary data stream without explicit task definitions and without forgetting previous knowledge. The widely adopted memory-replay approaches can gradually become less effective on long data streams, as the model may memorize the stored examples and overfit the memory buffer. Second, existing methods ignore the high uncertainty in the memory data distribution, since there is a big gap between the distribution of the memory data and the distribution of all previous data examples. To address these problems, we propose, for the first time, a principled memory evolution framework that dynamically evolves the memory data distribution to make the memory buffer gradually harder to memorize, via distributionally robust optimization (DRO). We then derive a family of methods that evolve the memory buffer data in the continuous probability measure space with Wasserstein gradient flow (WGF). The proposed DRO is with respect to the worst-case evolved memory data distribution, thus guaranteeing model performance and learning more robust features than existing memory-replay-based methods. Extensive experiments on existing benchmarks demonstrate the effectiveness of the proposed methods in alleviating forgetting. As a by-product of the proposed framework, our method is more robust to adversarial examples than existing task-free CL methods.
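A toy version of the "evolve the memory to be harder" idea: take gradient-ascent steps on the stored examples so that the current model's loss on them increases, a finite-sample stand-in for the DRO inner maximization (the actual method evolves the distribution via Wasserstein gradient flow). The linear model and step sizes below are illustrative assumptions.

```python
import numpy as np

def evolve_memory(X, y, w, step=0.1, n_steps=5):
    """Make stored memory examples gradually harder by gradient
    ASCENT on the squared-error loss of a fixed linear model w."""
    X = X.copy()
    for _ in range(n_steps):
        residual = X @ w - y            # per-example error
        grad_X = residual[:, None] * w  # d(loss)/dX per example
        X += step * grad_X              # ascent: increase the loss
    return X

rng = np.random.default_rng(1)
w = np.array([1.0, -1.0])
X = rng.normal(size=(6, 2))
y = X @ w + 0.5                         # model starts slightly wrong
X_hard = evolve_memory(X, y, w)
loss_before = float(np.mean((X @ w - y) ** 2))
loss_after = float(np.mean((X_hard @ w - y) ** 2))
```

Replaying `X_hard` instead of `X` forces the learner to keep working on examples it can no longer trivially memorize.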
Federated adversarial domain adaptation is a unique distributed minimax training task due to the prevalence of label imbalance across clients, where each client only sees a subset of the label classes needed to train the global model. To tackle this problem, we propose a distributed minimax optimizer, referred to as FedMM, designed specifically for the federated adversarial domain adaptation problem. It works well even when clients have different label classes and some clients only have unsupervised tasks. We prove that FedMM guarantees convergence to a stationary point with domain-shifted unsupervised data. On a variety of benchmark datasets, extensive experiments show that FedMM outperforms federated optimizers based on the gradient descent ascent (GDA) algorithm. For example, when training from scratch, it outperforms other GDA-based federated averaging methods by around 20% in accuracy over the same communication rounds; and when training from pre-trained models, it consistently outperforms them by 5.4% to 9% in accuracy across different networks.
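FedMM builds on minimax optimization. The sketch below shows only the basic simultaneous gradient-descent-ascent update on a toy strongly-convex-strongly-concave saddle objective; it is not the FedMM algorithm itself, and all names and constants are illustrative.

```python
def gda_step(x, y, lr_x=0.05, lr_y=0.05):
    """One simultaneous gradient-descent-ascent step on the toy
    saddle objective f(x, y) = x**2 + x*y - y**2
    (minimize over x, maximize over y; saddle point at the origin)."""
    grad_x = 2 * x + y      # df/dx
    grad_y = x - 2 * y      # df/dy
    return x - lr_x * grad_x, y + lr_y * grad_y

x, y = 1.0, 1.0
for _ in range(500):
    x, y = gda_step(x, y)   # converges toward the saddle (0, 0)
```

In a federated minimax method, each client runs local updates of this flavor and the server aggregates them; guaranteeing convergence under domain shift and label imbalance is the hard part the abstract addresses.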
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet they ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and in addressing the dilemma between classification and localization performance.
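For reference, the vanilla CAM that these methods build on weights the final convolutional feature maps by the classifier weights of the target class and sums over channels. The shapes and names below are toy placeholders, not the KD-CI-CAM pipeline.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Vanilla CAM: channel-weighted sum of the last conv features.
    feature_maps: (C, H, W); fc_weights: (num_classes, C)."""
    w = fc_weights[class_idx]                      # (C,)
    cam = np.tensordot(w, feature_maps, axes=1)    # (H, W)
    cam = np.maximum(cam, 0)                       # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                      # normalize to [0, 1]
    return cam

rng = np.random.default_rng(2)
feats = rng.random((8, 7, 7))       # toy conv feature maps
weights = rng.random((5, 8))        # toy classifier weights, 5 classes
cam = class_activation_map(feats, weights, class_idx=3)
```

The co-occurrence problem shows up exactly here: if "water" channels fire whenever "fish" is the label, they receive high classifier weight and the CAM highlights water as well, which causal intervention aims to remove.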
Dynamic treatment regimes assign personalized treatments to patients sequentially over time based on their baseline information and time-varying covariates. In mobile health applications, these covariates are typically collected at different frequencies over a long time horizon. In this paper, we propose a deep spectral Q-learning algorithm, which integrates principal component analysis (PCA) with deep Q-learning to handle the mixed frequency data. In theory, we prove that the mean return under the estimated optimal policy converges to that under the optimal one and establish its rate of convergence. The usefulness of our proposal is further illustrated via simulations and an application to a diabetes dataset.
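The PCA step of the proposal can be sketched as projecting the high-frequency covariates onto their leading principal components before they enter the Q-network; all dimensions and names below are illustrative, not the paper's setup.

```python
import numpy as np

def pca_compress(X, k):
    """Project centered observations onto their top-k principal
    components (right singular vectors of the centered data)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T          # (n, k) low-dimensional states

rng = np.random.default_rng(3)
high_freq = rng.normal(size=(50, 20))   # e.g. dense sensor readings
state = pca_compress(high_freq, k=3)    # compact state for Q-learning
```

The compressed `state` would then be concatenated with the low-frequency baseline covariates as input to the deep Q-network.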
Nowadays, time-stamped web documents related to general news queries flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we extract the event-level attention feature during generation, with its sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist extractive summarization, where the extracted summary likewise comes in time order. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
Hybrid unmanned aerial vehicles (UAVs) integrate the efficient forward flight of fixed-wing UAVs with the vertical takeoff and landing (VTOL) capabilities of multicopter UAVs. This paper presents the modeling, control, and simulation of a new type of hybrid micro-small UAV, coined the lifting-wing quadcopter. The airframe orientation of the lifting wing needs to tilt to a specific angle, often within $45$ degrees, neither nearly $90$ nor approximately $0$ degrees. Compared with some convertiplane and tail-sitter UAVs, the lifting-wing quadcopter has a highly reliable structure, robust wind resistance, low cruise speed, and reliable transition flight, making it a promising platform for fully autonomous flight outdoors or in some confined indoor airspace. In the modeling part, forces and moments generated by both the lifting wing and the rotors are considered. Based on the established model, a unified controller for the full flight phase is designed. The controller can treat hovering and forward flight uniformly, and enables a continuous transition between the two modes depending on the velocity command. Moreover, by taking rotor thrust and aerodynamic force into consideration simultaneously, an optimization-based control allocation is utilized to realize cooperative control for energy saving. Finally, comprehensive Hardware-In-the-Loop (HIL) simulations are performed to verify the advantages of the designed aircraft and the proposed controller.
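Under strong simplifying assumptions, optimization-based control allocation can be illustrated as a minimum-norm least-squares problem mapping a commanded wrench to actuator commands; the effectiveness matrix below is a made-up toy, not the paper's aerodynamic model.

```python
import numpy as np

def allocate_controls(B, wrench_cmd):
    """Minimum-energy control allocation: among all actuator commands
    u with B @ u = wrench_cmd, pick the one with minimal ||u||
    (np.linalg.lstsq returns the minimum-norm solution for
    underdetermined systems)."""
    u, *_ = np.linalg.lstsq(B, wrench_cmd, rcond=None)
    return u

# Toy effectiveness matrix mapping 4 rotor thrusts to
# (total thrust, roll moment, pitch moment) -- hypothetical numbers.
B = np.array([
    [1.0,  1.0,  1.0,  1.0],
    [1.0, -1.0,  1.0, -1.0],
    [1.0,  1.0, -1.0, -1.0],
])
wrench = np.array([4.0, 0.0, 0.0])   # hover: thrust only, no moments
u = allocate_controls(B, wrench)     # equal thrust on all four rotors
```

A full allocator would additionally fold the wing's aerodynamic force into the map and enforce actuator limits, which turns the problem into a constrained optimization as described in the abstract.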
Due to their ability to offer more comprehensive information than data from a single view, multi-view (multi-source, multi-modal, multi-perspective, etc.) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality becomes more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN) based models can learn the weight of data adaptively, the lack of research on explicitly quantifying the data quality of each view when fusing them renders these models inexplicable, and they perform unsatisfactorily and inflexibly in downstream remote sensing tasks. To fill this gap, in this paper, evidential deep learning is introduced to the task of aerial-ground dual-view remote sensing scene classification to model the credibility of each view. Specifically, the theory of evidence is used to calculate an uncertainty value that describes the decision-making risk of each view. Based on this uncertainty, a novel decision-level fusion strategy is proposed to ensure that the view with lower risk obtains more weight, making the classification more credible. On two well-known, publicly available datasets of aerial-ground dual-view remote sensing images, the proposed approach achieves state-of-the-art results, demonstrating its effectiveness. The code and datasets of this article are available at the following address: https://github.com/gaopiaoliang/Evidential.
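The per-view uncertainty can be sketched with the standard subjective-logic formulation used in evidential deep learning: per-class evidence defines Dirichlet parameters, and the vacuity u = K / S shrinks as total evidence grows. The fusion weights (1 - u) below are an illustrative choice, not necessarily the paper's exact strategy.

```python
def view_uncertainty(evidence):
    """Subjective-logic uncertainty from per-class evidence: with K
    classes and Dirichlet parameters alpha_k = e_k + 1, the belief
    masses are b_k = e_k / S and the vacuity is u = K / S, where
    S = sum(alpha)."""
    K = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    S = sum(alpha)
    belief = [e / S for e in evidence]
    return belief, K / S

# Two views of the same scene: the view with strong evidence gets
# low uncertainty, hence a larger fusion weight.
_, u_aerial = view_uncertainty([9.0, 1.0, 0.0])   # confident view
_, u_ground = view_uncertainty([0.5, 0.4, 0.1])   # weak evidence
w_aerial = 1 - u_aerial
w_ground = 1 - u_ground
```

Decision-level fusion then combines the two views' class scores with these risk-aware weights, so the low-risk aerial view dominates the final prediction.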